35 research outputs found

    Visualization and Human-Machine Interaction

    The digital age presents many challenges in the field of visualization. Visual imagery has been used effectively through the ages to communicate messages and to express both abstract and concrete ideas. Today, visualization has ever-expanding applications in science, engineering, education, medicine, entertainment and many other areas. Different areas of research contribute to innovation in the field of interactive visualization, such as data science, visual technology, the Internet of Things and many more. Among them, two areas of renowned importance are Augmented Reality and Visual Analytics. This thesis presents my research in the fields of visualization and human-machine interaction. The purpose of the proposed work is to investigate existing solutions in the area of Augmented Reality (AR) for maintenance. A smaller section of this thesis presents a minor research project on an equally important theme, Visual Analytics. Overall, the main goal is to identify the most important existing problems and then design and develop innovative solutions to address them. The maintenance application domain has been chosen since it is historically one of the first fields of application for Augmented Reality and it presents the most common and important challenges that AR can raise, as described in chapter 2. Since one of the main problems in AR application deployment is the reconfigurability of the application, a framework has been designed and developed that allows the user to create, deploy and update AR applications in real time. Furthermore, the research focused on the problems related to hands-free interaction, investigating the area of speech-recognition interfaces and designing innovative solutions to address the problems of intuitiveness and robustness of the interface. On the other hand, the area of Visual Analytics has been investigated: among the different areas of research, multidimensional data visualization, similarly to AR, poses specific problems related to the interaction between the user and the machine. An analysis of the existing solutions has been carried out in order to identify their limitations and to point out possible improvements. Since this analysis identifies the scatterplot as a renowned visualization tool worthy of further research, different techniques for adapting its usage to multidimensional data are analyzed. A multidimensional scatterplot has been designed and developed in order to perform a comparison with another multidimensional visualization tool, the ScatterDice. The first chapters of my thesis describe my investigations in the area of Augmented Reality for maintenance. Chapter 1 provides definitions for the most important terms and an introduction to AR. The second chapter focuses on maintenance, explaining the motivations that led to the choice of this application domain. Moreover, the analysis concerning open problems and related works is described along with the methodology adopted to design and develop the proposed solutions. The third chapter illustrates how the adopted methodology has been applied in order to address the problems described in the previous one. Chapter 4 describes the methodology adopted to carry out the tests and outlines the experimental results, whereas the fifth chapter illustrates the conclusions and points out possible future developments. Chapter 6 describes the analysis and research work performed in the field of Visual Analytics, more specifically on multidimensional data visualizations. 
    Overall, this thesis illustrates how the proposed solutions address common problems of visualization and human-machine interaction, such as interface design, robustness of the interface and acceptance of new technology, whereas other problems are related to the specific research domain, such as pose tracking and reconfigurability of the procedure for the AR domain

    A preliminary study of a hybrid user interface for augmented reality applications

    Augmented Reality (AR) applications are nowadays widespread in many fields of use, especially entertainment, and the market of AR applications for mobile devices is growing rapidly. Moreover, new and innovative hardware for human-computer interaction has been deployed, such as the Leap Motion Controller. This paper presents some preliminary results in the design and development of a hybrid interface for hands-free augmented reality applications. The paper introduces a framework to interact with AR applications through a speech and gesture recognition-based interface. A Leap Motion Controller is mounted on top of AR glasses and a speech recognition module completes the system. Results have shown that, using the speech or the gesture recognition module individually, the robustness of the user interface is strongly dependent on environmental conditions. On the other hand, a combined usage of both modules can provide a more robust input
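    The abstract does not detail how the two modalities are combined; the following is only a minimal sketch of one plausible fusion rule, assuming each recognizer reports a command with a confidence value. The class names, commands and threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of fusing speech and gesture recognition outputs.
# Names, confidence values and the threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Recognition:
    command: str       # e.g. "next_step", "repeat", "zoom_in" (hypothetical)
    confidence: float  # 0.0 - 1.0, as reported by the recognizer

def fuse(speech: Optional[Recognition],
         gesture: Optional[Recognition],
         threshold: float = 0.6) -> Optional[str]:
    """Accept a command when the two modalities agree, or when a single
    modality is confident enough; otherwise reject and ask the user to repeat."""
    if speech and gesture and speech.command == gesture.command:
        return speech.command                      # agreement: most robust case
    candidates = [r for r in (speech, gesture) if r and r.confidence >= threshold]
    if candidates:
        return max(candidates, key=lambda r: r.confidence).command
    return None                                    # too noisy / ambiguous

# Example: a noisy environment degrades speech, but both modalities agree.
print(fuse(Recognition("next_step", 0.4), Recognition("next_step", 0.8)))  # next_step
```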

    Single view vs. multiple views scatterplots

    Among all the available visualization tools, the scatterplot has been deeply analyzed through the years and many researchers have investigated how to improve this tool to face new challenges. The scatterplot is considered one of the most functional among the variety of visual data representations, due to its relative simplicity compared to other multivariable visualization techniques. Even so, one of the most significant unsolved challenges in data visualization is effectively displaying datasets with many attributes or dimensions, such as multidimensional or multivariate ones. The focus of this research is to compare the single view and the multiple views visualization paradigms for displaying multivariable datasets using scatterplots. A multivariable scatterplot has been developed as a web application to provide the single view tool, whereas for the multiple views visualization, the ScatterDice web app has been slightly modified and adopted as a traditional, yet interactive, scatterplot matrix. Finally, a taxonomy of tasks for visualization tools has been chosen to define the use case and the tests to compare the two paradigms
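    As an illustration of the single-view idea, the sketch below maps a four-dimensional dataset onto one scatterplot by encoding the extra dimensions as colour and marker size. The column names and data are invented; the thesis tool is a web application, not matplotlib.

```python
# Illustrative single-view multivariable scatterplot: two dimensions on the
# axes, two more encoded as colour and marker size. Data and names are synthetic.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = {
    "dim_x": rng.normal(size=200),
    "dim_y": rng.normal(size=200),
    "dim_c": rng.uniform(0, 1, size=200),    # third dimension -> colour
    "dim_s": rng.uniform(10, 200, size=200), # fourth dimension -> marker size
}

sc = plt.scatter(data["dim_x"], data["dim_y"],
                 c=data["dim_c"], s=data["dim_s"],
                 cmap="viridis", alpha=0.7)
plt.colorbar(sc, label="dim_c")
plt.xlabel("dim_x")
plt.ylabel("dim_y")
plt.title("Single-view scatterplot with colour and size channels")
plt.show()
```

    A scatterplot matrix such as ScatterDice instead shows one panel per pair of dimensions, which is the multiple-views paradigm the study compares against.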

    Storytelling in the Metaverse: From Desktop to Immersive Virtual Reality Storyboarding

    Creatives from the animation and film industries have always been experimenting with innovative tools and methodologies to improve the creation of prototypes of their visual sequences before bringing them to life. In recent years, as realistic real-time rendering techniques have emerged, the increasing popularity of virtual reality (VR) can lead to new approaches and solutions, leveraging the immersive and interactive features provided by 3D immersive experiences. A 3D desktop application and a novel storyboarding pipeline, which can automatically generate a storyboard including camera details and a textual description of the actions performed in three-dimensional environments, have already been investigated in previous work. The aim was to exploit new technologies to improve existing 3D storytelling approaches, thus providing a software solution for expert and novice storyboarders. This research investigates 3D storyboarding in immersive virtual reality (IVR) to move toward a new storyboarding paradigm. IVR systems provide peculiarities such as body-controlled exploration of the 3D scene and a head-dependent camera view that can extend the features of traditional storyboarding tools. The proposed system enables users to set up the virtual stage, adding elements to the scene and exploring the environment as they build it. After that, users can select the available characters or the camera, control them in first person, position them in the scene, and perform actions chosen from a list of options, each paired with a corresponding animation. Relying on the concept of a state machine, the system can automatically generate the list of available actions depending on the context. Finally, the descriptions for each storyboard panel are automatically generated based on the history of activities performed. The proposed application maintains all the functionalities of the desktop version and can be effectively used to create storyboards in immersive virtual environments
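    The state-machine idea can be sketched as follows: each character is in a state, and only the transitions defined for that state are offered to the user; the history of performed actions then feeds the automatically generated panel descriptions. The states, actions and transitions below are invented examples, not the system's actual action set.

```python
# Minimal sketch of a state machine exposing only context-valid actions.
# States, actions and transitions are invented for illustration.

TRANSITIONS = {
    "idle":    {"walk_to": "walking", "sit_down": "sitting", "pick_up": "holding"},
    "walking": {"stop": "idle"},
    "sitting": {"stand_up": "idle"},
    "holding": {"put_down": "idle"},
}

class Character:
    def __init__(self, name: str, state: str = "idle"):
        self.name = name
        self.state = state
        self.history = []          # later used to generate panel descriptions

    def available_actions(self):
        """Actions the UI should offer in the current state."""
        return sorted(TRANSITIONS[self.state])

    def perform(self, action: str):
        if action not in TRANSITIONS[self.state]:
            raise ValueError(f"{action!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[self.state][action]
        self.history.append(action)

hero = Character("Hero")
print(hero.available_actions())   # ['pick_up', 'sit_down', 'walk_to']
hero.perform("walk_to")
print(hero.available_actions())   # ['stop']
```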

    Prototyping industrial workstation in the Metaverse: a Low Cost Automation assembly use case

    Low-cost Automation (LCA) represents a relevant use case that can benefit from a design and prototyping step experienced in Immersive Virtual Reality (IVR). LCA is a technology that automates some activities using mostly standard automation components available off-the-shelf. However, since LCA systems should adapt to existing standard production lines and workstations, workers need to customize standard LCA templates. This adaptation and customization step is usually performed on the real, physical LCA system; thus, it can be very time-consuming, and in case of errors it may be necessary to rebuild many parts from scratch. This paper investigates the usage of an Immersive Virtual Environment (IVE) as a tool for rapid and easy prototyping of LCA solutions. The proposed system loads the 3D models of the components from a digital library and provides users with a set of tools to speed up the creation of the LCA system in a virtual room experienced through an IVR headset. When the user completes the creation of the LCA system, it is possible to simulate its physical properties using the Unity 3D physics engine. Moreover, it is possible to obtain a list of all the pieces needed to build the prototype and their dimensions, to easily reproduce them in the real world. To assess the usability of the proposed system, an LCA building task has been defined, in which users had to build an LCA solution using a template model for reference. Results show that the system usability has been highly appreciated by both skilled users and inexperienced ones
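    The parts-list feature amounts to collecting the components placed in the virtual prototype, grouping identical ones, and reporting quantities and dimensions. The sketch below shows this idea in plain Python; the actual system is implemented in Unity, and the component names, fields and dimensions are hypothetical.

```python
# Conceptual sketch of producing a parts list from the components placed in
# the virtual LCA prototype. Names and dimensions are illustrative only.

from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    part_id: str              # catalogue identifier of the off-the-shelf part
    length_mm: float
    width_mm: float
    height_mm: float

def parts_list(placed: list) -> list:
    """Group identical components and report quantity plus dimensions."""
    counts = Counter(placed)
    return [f"{qty} x {c.part_id} ({c.length_mm} x {c.width_mm} x {c.height_mm} mm)"
            for c, qty in sorted(counts.items(), key=lambda kv: kv[0].part_id)]

prototype = [Component("profile-40", 400, 40, 40),
             Component("profile-40", 400, 40, 40),
             Component("roller-track", 600, 50, 30)]
for line in parts_list(prototype):
    print(line)
# 2 x profile-40 (400 x 40 x 40 mm)
# 1 x roller-track (600 x 50 x 30 mm)
```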

    Augmented Reality in Industry 4.0

    Since the origins of Augmented Reality (AR), industry has always been one of its prominent application domains. The recent advances in both portable and wearable AR devices and the new challenges introduced by the fourth industrial revolution (known as Industry 4.0) further enlarge the applicability of AR to improve productivity and to enhance the user experience. This paper provides an overview of the most important applications of AR in the industry domain. Key among the issues raised in this paper are the various applications of AR that enhance the user's ability to understand the movement of a mobile robot, the movements of a robot arm and the forces applied by a robot. It is recommended that, in view of the rising need for user and data privacy, the technologies that form the basis of Industry 4.0 will need to change the way they work to embrace data privacy

    Enhancing cultural tourism by a mixed reality application for outdoor navigation and information browsing using immersive devices

    In this paper a mixed reality application is introduced; this application runs on Microsoft Hololens and has been designed to provide information on a city scale. The application was developed to provide information about historical buildings, thus supporting cultural outdoor tourism. The huge amount of multimedia data stored in the archives of the Italian public broadcaster RAI is used to enrich the user experience. A remote image and video analysis application receives an image stream from the user and identifies known objects framed in the images. The user can select the object (monument/building/artwork) for which augmented contents have to be displayed (video, text, audio); the user can interact with these contents through a set of defined gestures. Moreover, if the object of interest is detected and tracked by the mixed reality application, 3D contents can also be overlaid on and aligned with the real world
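    The client side of such a loop can be sketched as sending captured frames to the remote analysis service and receiving the recognized objects back. The endpoint URL, request format and response fields below are assumptions for illustration only; the paper does not specify its interface.

```python
# Hedged sketch of the device-side loop: frames are sent to a remote
# recognition service and the recognized objects are returned.
# The URL and JSON structure are hypothetical.

import requests

RECOGNITION_URL = "http://example.org/recognize"   # hypothetical endpoint

def recognize_frame(jpeg_bytes: bytes) -> list:
    """Send one frame, return a list of recognized objects with labels."""
    resp = requests.post(RECOGNITION_URL,
                         files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
                         timeout=2.0)
    resp.raise_for_status()
    return resp.json().get("objects", [])

# The MR application can then let the user pick one of the returned labels
# (monument, building, artwork) and fetch the associated augmented contents.
```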

    Harmonize: a shared environment for extended immersive entertainment

    Virtual reality (VR) and augmented reality (AR) applications are widespread nowadays. Moreover, recent technology innovations have led to the diffusion of commercial head-mounted displays (HMDs) for immersive VR: users can enjoy entertainment activities that fill their visual fields, experiencing the sensation of physical presence in these virtual immersive environments (IEs). Even if AR and VR are mostly used separately, they can be effectively combined to provide a multi-user shared environment (SE), where two or more users perform some specific tasks in a cooperative or competitive way, providing a wider set of interactions and use cases compared to immersive VR alone. However, due to the differences between the two technologies, it is difficult to develop SEs offering a similar experience to both AR and VR users. This paper presents Harmonize, a novel framework to deploy applications based on SEs with a comparable experience for both AR and VR users. Moreover, the framework is hardware-independent and it has been designed to be as extendable to novel hardware as possible. An immersive game has been designed to test and to evaluate the validity of the proposed framework. The assessment of the system through the System Usability Scale (SUS) questionnaire and the Game Experience Questionnaire (GEQ) shows a positive evaluation
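    For reference, the SUS questionnaire cited in the evaluation is scored with the standard procedure (odd items contribute their rating minus one, even items contribute five minus their rating, and the sum is scaled to 0-100). The worked example below uses invented responses and does not reproduce the paper's data.

```python
# Standard SUS scoring, shown as a worked example with invented responses.

def sus_score(responses: list) -> float:
    """responses: ten 1-5 Likert answers, in the order of the SUS form."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5                   # scale the 0-40 sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```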

    A Comparison of Three Different NeuroTag Visualization Media: Brain Visual Stimuli by Monitor, Augmented and Virtual Reality Devices

    Brain-Computer Interfaces (BCIs) have proved able to overcome some limitations of other input modes (e.g., gestures, voice, haptics, etc.). BCIs detect brain activity and identify specific patterns within it. When a specific brain activity is recognized, a well-defined action can be triggered, thus implementing a human-machine interaction paradigm. BCIs can be used in different domains ranging from industry to services for impaired people. This paper considers BCIs that can be designed and developed with the NextMind, a small and ergonomic device that captures the activity of the visual cortex. Objects called NeuroTags can be inserted in both 2D and 3D scenes; these objects act like switches when the user is able to focus on them. The aim of this work is to evaluate different NeuroTag configurations (varying in terms of size and distance) as well as different visualization devices: a monitor, a virtual reality head-mounted display, and an augmented reality head-mounted display. User tests show that the best tradeoff between robustness and selection speed is obtained by medium-sized and medium-spaced NeuroTags; on the other hand, monitor visualization outperforms the AR solution, whereas it is not possible to identify statistically significant differences between monitor-VR and AR-VR
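    A comparison of this kind typically comes down to pairwise tests between the three devices with a correction for multiple comparisons. The sketch below shows one such procedure (Wilcoxon signed-rank with a Bonferroni-corrected threshold) on synthetic data; the paper's actual test, sample size and measurements are not reproduced here.

```python
# Hedged sketch of pairwise comparisons between visualization media.
# Data are synthetic and the choice of test is illustrative.

from itertools import combinations
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
times = {   # hypothetical selection times (s) of the same participants
    "monitor": rng.normal(2.0, 0.2, 12),
    "AR":      rng.normal(2.6, 0.2, 12),
    "VR":      rng.normal(2.2, 0.2, 12),
}

alpha = 0.05 / 3   # Bonferroni correction for three pairwise tests
for a, b in combinations(times, 2):
    stat, p = wilcoxon(times[a], times[b])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: p = {p:.4f} ({verdict})")
```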

    An Evaluation of Game Usability in Shared Mixed and Virtual Environments

    Augmented reality (AR) and virtual reality (VR) technologies are becoming increasingly pervasive and important in the information technology area. Thanks to technological improvements, desktop interfaces are being replaced by immersive VR devices that offer a more compelling game experience. AR games have started to draw the attention of researchers and companies for their ability to exploit both the real and virtual environments. Fascinating new challenges are generated by the possibility of designing hybrid games that allow several users to access shared environments exploiting the features of both AR and VR devices. However, the user experience and usability can be affected by several parameters, such as the field of view (FoV) of the employed devices or the realism of the scene. The work presented in this chapter aims to assess the impact of the FoV on the usability of the interfaces in a first-person shooter game. Two players, interacting with AR (first player) and VR (second player) devices, can fight each other in a large game environment. Although we cannot ascertain that different FoVs affected game usability, users considered the narrow-FoV interfaces to be less usable, even though they could freely move around the real environment